73 research outputs found

    Ants use a predictive mechanism to compensate for passive displacements by wind

    Summary: Insect navigation is a fruitful system for analysing the ingenious and economical mechanisms that can underlie complex behaviour [1]. Past work has highlighted unsuspected navigational abilities in ants and bees, such as accurate path integration, long-distance route following, and homing from novel locations [2]. Here we report that ants can deal with one of the greatest challenges for any navigator: uncontrolled passive displacements. Foraging ants were blown by a jet of air over 3 meters into a dark pit. When subsequently released at windless unfamiliar locations, the ants headed in the compass direction opposite to the one in which they had been blown away, thus functionally increasing their chance of returning to familiar areas. Ants do not appear to collect directional information during the actual passive displacement, but beforehand, while clutching the ground to resist the wind. During that time window, ants compute and store the compass direction of the wind. This is achieved by integrating the egocentric perception of the wind direction relative to their current body axis with celestial compass information from their eyes.
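The final step described above, combining the egocentrically sensed wind angle with a celestial-compass heading to obtain a world-frame wind direction, can be sketched in a few lines. This is a hypothetical illustration; the function names and the degree convention are assumptions, not taken from the paper:

```python
def wind_compass_direction(heading_deg, wind_egocentric_deg):
    """Compass direction the wind blows toward, obtained by adding the
    wind angle relative to the body axis to the ant's celestial-compass
    heading. Angles in degrees, compass-frame; names are illustrative."""
    return (heading_deg + wind_egocentric_deg) % 360.0

def escape_heading(wind_direction_deg):
    """Head opposite to the displacement direction, which functionally
    increases the chance of returning to familiar terrain."""
    return (wind_direction_deg + 180.0) % 360.0

# An ant heading 90 deg, pushed along its own body axis, is displaced
# toward 90 deg and should later walk back at 270 deg.
blow_dir = wind_compass_direction(90.0, 0.0)   # 90.0
home_dir = escape_heading(blow_dir)            # 270.0
```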

    How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation

    The visual systems of animals have to provide information to guide behaviour, and the informational requirements of an animal’s behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators, it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
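The "simple view matching strategies" mentioned here are commonly implemented as a rotational image difference function: roll the current panoramic view against a stored one and take the best-matching rotation as the heading estimate. A minimal sketch under that assumption, not the authors' implementation:

```python
import numpy as np

def best_heading(current, stored):
    """Rotational image difference: circularly shift the current
    panoramic view and return the shift (in columns) that minimises the
    sum-squared difference with the stored view."""
    diffs = [np.sum((np.roll(current, s) - stored) ** 2)
             for s in range(len(current))]
    return int(np.argmin(diffs))

# Toy 1-D panorama: the stored view, and the same scene seen after the
# agent has rotated by 3 columns.
stored = np.array([0, 1, 2, 5, 2, 1, 0, 0], dtype=float)
current = np.roll(stored, -3)
print(best_heading(current, stored))  # 3: rotate back by 3 columns
```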

    Crucial role of ultraviolet light for desert ants in determining direction from the terrestrial panorama

    Ants use the panoramic skyline in part to determine a direction of travel. A theoretically elegant way to define where terrestrial objects meet the sky is to use an opponent-process channel contrasting green wavelengths of light with ultraviolet (UV) wavelengths. Compared with the sky, terrestrial objects reflect relatively more green wavelengths. Using such an opponent-process channel gains constancy in the face of changes in overall illumination level. We tested the use of UV wavelengths in desert ants by using a plastic that filtered out most of the energy below 400 nm. Ants, Melophorus bagoti, were trained to home with an artificial skyline provided by an arena (experiment 1) or with the natural panorama (experiment 2). On a test, a homing ant was captured just before she entered her nest, and then brought back to a replicate arena (experiment 1) or the starting point (the feeder, experiment 2) and released. Blocking UV light led to deteriorations in orientation in both experiments. When the artificial skyline was changed from opaque to transparent UV-blocking plastic (experiment 3), on the other hand, the ants were still oriented. We conclude that UV wavelengths play a crucial role in determining direction based on the terrestrial surround.
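The opponent-process channel described above can be sketched as a per-pixel contrast between UV and green intensities; normalising by their sum gives the illumination constancy the passage mentions. The function name, the input encoding, and the zero threshold are illustrative assumptions:

```python
import numpy as np

def sky_mask(uv, green):
    """Classify pixels as sky where the UV/green opponent signal is
    positive. Dividing by the summed intensity makes the channel robust
    to changes in overall illumination level."""
    uv = np.asarray(uv, dtype=float)
    green = np.asarray(green, dtype=float)
    opponent = (uv - green) / (uv + green + 1e-9)  # in [-1, 1]
    return opponent > 0.0

# Sky reflects relatively more UV; terrestrial objects relatively more
# green, so the first two pixels read as sky.
uv    = np.array([0.8, 0.7, 0.2, 0.1])
green = np.array([0.3, 0.2, 0.6, 0.5])
print(sky_mask(uv, green))  # [ True  True False False]
```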

    How variation in head pitch could affect image matching algorithms for ant navigation

    Desert ants are a model system for animal navigation, using visual memory to follow long routes across both sparse and cluttered environments. Most accounts of this behaviour assume retinotopic image matching, e.g. recovering heading direction by finding a minimum in the image difference function as the viewpoint rotates. But most models neglect the potential image distortion that could result from unstable head motion. We report that for ants running across a short section of natural substrate, the head pitch varies substantially: by over 20 degrees with no load, and 60 degrees when carrying a large food item. There is no evidence of head stabilisation. Using a realistic simulation of the ant’s visual world, we demonstrate that this range of head pitch significantly degrades image matching. The effect of pitch variation can be ameliorated by a memory bank of views densely sampled along a route, so that an image sufficiently similar in pitch and location is available for comparison. However, with large pitch disturbance, inappropriate memories sampled at distant locations are often recalled and navigation along a route can be adversely affected. Ignoring images obtained at extreme pitches, or averaging images over several pitches, does not significantly improve performance.

    Rotation invariant visual processing for spatial memory in insects

    Visual memory is crucial to navigation in many animals, including insects. Here, we focus on the problem of visual homing, that is, using comparison of the view at a current location with a view stored at the home location to control movement towards home by a novel shortcut. Insects show several visual specializations that appear advantageous for this task, including an almost panoramic field of view and ultraviolet light sensitivity, which enhances the salience of the skyline. We discuss several proposals for subsequent processing of the image to obtain the required motion information, focusing on how each might deal with the problem of yaw rotation of the current view relative to the home view. Possible solutions include tagging of views with information from the celestial compass system, using multiple views pointing towards home, or rotation invariant encoding of the view. We illustrate briefly how a well-known shape description method from computer vision, Zernike moments, could provide a compact and rotation invariant representation of sky shapes to enhance visual homing. We discuss the biological plausibility of this solution, and also a fourth strategy, based on observed behaviour of insects, that involves transfer of information from visual memory matching to the compass system.
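Zernike moments themselves are involved to compute, but the rotation-invariance principle can be shown with a simpler one-dimensional analogue (a deliberate substitution, not the paper's method): a yaw rotation circularly shifts a panoramic skyline-height profile, which changes only the phases of its Fourier coefficients, so the magnitude spectrum is a rotation-invariant signature of the scene:

```python
import numpy as np

def rotation_invariant_code(skyline):
    """Magnitudes of the DFT of a circular skyline-height profile.
    A circular shift (yaw rotation) alters only the phases, so the
    magnitude spectrum is unchanged -- a 1-D analogue of the rotation
    invariance of Zernike moment magnitudes."""
    return np.abs(np.fft.rfft(np.asarray(skyline, dtype=float)))

home = [3, 5, 8, 6, 4, 2, 1, 2]     # skyline heights around the horizon
rotated = np.roll(home, 3)           # same place, body rotated
print(np.allclose(rotation_invariant_code(home),
                  rotation_invariant_code(rotated)))  # True
```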

    How Ants Use Vision When Homing Backward

    Ants can navigate over long distances between their nest and food sites using visual cues [1, 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3, 4, 5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2, 6]. Can ants use their visual memories of the terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator [5] but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories.
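The decoupling described above, a travel direction held in a world-centred (holonomic) frame while body orientation varies freely, can be sketched as follows. The angle convention, names, and step size are illustrative assumptions:

```python
import math

def backward_step(x, y, body_heading, travel_direction, step=1.0):
    """Holonomic travel: the ant keeps a world-frame travel direction
    (set earlier, e.g. after a forward peek at terrestrial cues) and
    moves along it regardless of which way its body points, as when
    dragging a food item backward. Angles in radians."""
    x += step * math.cos(travel_direction)
    y += step * math.sin(travel_direction)
    return x, y, body_heading  # body orientation is unchanged

# Body faces east (0 rad) but the travel direction is west (pi): the
# ant drags its food backward, still progressing westward.
x, y, h = backward_step(0.0, 0.0, 0.0, math.pi)
print(round(x, 1), round(y, 1))  # -1.0 0.0
```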

    Route following without scanning

    Desert ants are expert navigators, foraging over large distances using visually guided routes. Recent models of route following can reproduce aspects of route guidance, yet the underlying motor patterns do not reflect those of foraging ants. Specifically, these models select the direction of movement by rotating to find the most familiar view. Yet scanning patterns are only occasionally observed in ants. We propose a novel route following strategy inspired by klinokinesis. By using the familiarity of the view to modulate the magnitude of alternating left and right turns, and the size of forward steps, this strategy is able to continually correct the heading of a simulated ant to maintain its course along a route. Route following by klinokinesis and visual compass are evaluated against real ant routes in a simulation study and on a mobile robot in the real ant habitat. We report that in unfamiliar surroundings the proposed method can also generate ant-like scanning behaviours.
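The proposed strategy, familiarity modulating both the magnitude of alternating turns and the size of forward steps, can be sketched as follows. The constants and the familiarity scale are illustrative assumptions, not the paper's parameters:

```python
import math

def klinokinesis_step(x, y, heading, turn_sign, familiarity):
    """One step of familiarity-modulated klinokinesis: the less
    familiar the current view (familiarity in [0, 1]), the larger the
    alternating turn and the shorter the forward step, so the agent
    casts about when lost and runs straight when on the route."""
    max_turn = math.radians(60.0)
    turn = turn_sign * max_turn * (1.0 - familiarity)
    heading = (heading + turn) % (2 * math.pi)
    step = 0.2 + 0.8 * familiarity       # short steps when unfamiliar
    x += step * math.cos(heading)
    y += step * math.sin(heading)
    return x, y, heading, -turn_sign     # alternate left/right turns

x, y, h, sign = 0.0, 0.0, 0.0, 1
x, y, h, sign = klinokinesis_step(x, y, h, sign, familiarity=1.0)
# A perfectly familiar view: no turn, a full step straight ahead.
print(round(x, 2), round(y, 2))  # 1.0 0.0
```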

    Insect-Inspired Navigation Algorithm for an Aerial Agent Using Satellite Imagery

    Humans have long marveled at the ability of animals to navigate swiftly, accurately, and across long distances. Many mechanisms have been proposed for how animals acquire, store, and retrace learned routes, yet many of these hypotheses appear incongruent with behavioral observations and the animals’ neural constraints. The “Navigation by Scene Familiarity Hypothesis,” proposed originally for insect navigation, offers an elegantly simple solution for retracing previously experienced routes without the need for complex neural architectures and memory retrieval mechanisms. This hypothesis proposes that an animal can return to a target location by simply moving toward the most familiar scene at any given point. Proof-of-concept simulations have used computer-generated ant’s-eye views of the world, but here we test the ability of scene familiarity algorithms to retrace training routes across satellite images extracted from Google Maps. We find that Google satellite images are so rich in visual information that familiarity algorithms can be used to retrace even tortuous routes with low-resolution sensors. We discuss the implications of these findings not only for animal navigation but also for the potential development of visual augmentation systems and robot guidance algorithms.
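The familiarity measure at the heart of this hypothesis can be sketched as the (negated) smallest pixel difference between the current view and any view stored along the training route, with the agent moving toward whichever candidate heading scores highest. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def familiarity(view, memory_bank):
    """Negative of the smallest sum-squared difference between the
    current view and any stored training view; higher is more
    familiar."""
    return -min(float(np.sum((view - m) ** 2)) for m in memory_bank)

def choose_heading(candidate_views, memory_bank):
    """Move toward the most familiar scene: pick the candidate heading
    whose predicted view best matches some stored view."""
    scores = [familiarity(v, memory_bank) for v in candidate_views]
    return int(np.argmax(scores))

# Toy 3-pixel views stored along a training route.
memory = [np.array([1.0, 2.0, 3.0]), np.array([2.0, 3.0, 4.0])]
candidates = [np.array([9.0, 9.0, 9.0]),   # unfamiliar direction
              np.array([2.0, 3.0, 4.1])]   # close to a trained view
print(choose_heading(candidates, memory))  # 1
```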

    Using deep autoencoders to investigate image matching in visual navigation

    This paper discusses the use of deep autoencoder networks to find a compressed representation of an image, which can be used for visual navigation. Images reconstructed from the compressed representation are tested to see if they retain enough information to be used as a visual compass (in which an image is matched with another to recall a bearing/movement direction), as this ability is at the heart of a visual route navigation algorithm. We show that both reconstructed images and compressed representations from different layers of the autoencoder can be used in this way, suggesting that a compact image code is sufficient for visual navigation and that deep networks hold promise for finding optimal visual encodings for this task.
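As a rough illustration of matching in a compressed code space, PCA can stand in for the deep autoencoder, since PCA is exactly the optimal linear autoencoder. This is a sketch under that substitution, not the paper's network:

```python
import numpy as np

def make_encoder(views, k):
    """Fit a k-dimensional PCA compression of the training views.
    PCA acts as a linear autoencoder here, standing in for the paper's
    deep network to illustrate recall from a compact image code."""
    X = np.asarray(views, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]

    def encode(view):
        return basis @ (np.asarray(view, dtype=float) - mean)

    return encode

# Toy panoramas: 20 random 50-pixel views stored along a route.
views = np.random.default_rng(0).normal(size=(20, 50))
encode = make_encoder(views, k=5)

# Visual-compass-style recall: on re-encountering a stored view, find
# the nearest memory in the compressed code space.
current = views[7].copy()
dists = [np.linalg.norm(encode(current) - encode(v)) for v in views]
print(int(np.argmin(dists)))  # 7
```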

    Spontaneous Reorientation Is Guided by Perceived Surface Distance, Not by Image Matching Or Comparison

    Humans and animals recover their sense of position and orientation using properties of the surface layout, but the processes underlying this ability are disputed. Although behavioral and neurophysiological experiments on animals have long suggested that reorientation depends on representations of surface distance, recent experiments on young children join experimental studies and computational models of animal navigation to suggest that reorientation depends either on processing of any continuous perceptual variables or on matching of 2D, depthless images of the landscape. We tested the surface distance hypothesis against these alternatives through studies of children, using environments whose 3D shape and 2D image properties were arranged to enhance or cancel impressions of depth. In the absence of training, children reoriented by subtle differences in perceived surface distance under conditions that challenge current models of 2D-image matching or comparison processes. We provide evidence that children’s spontaneous navigation depends on representations of 3D layout geometry. (National Institutes of Health (U.S.) Grant HD 23103)